
    Towards automatic transcription of soundpainting gestures for the analysis of interactive performances

    Objective analysis and documentation of interactive performances is often difficult because such performances are extremely complex. Soundpainting, a gestural language dedicated to the guided improvisation of musicians, actors, or dancers, offers a privileged ground for this kind of analysis. Predefined gestures are produced to indicate to the improvisers the type of material desired. Transcribing the gestures in order to document performances seems entirely feasible but very tedious. In this article, we present an automatic gesture recognition tool dedicated to annotating a soundpainting performance. A first prototype was developed to recognize gestures filmed with a Kinect-type camera. Automatic gesture transcription could thus lead to various applications, notably the analysis of soundpainting practice in general, but also the understanding and modeling of interactive musical performances.

    Time-continuous Estimation of Emotion in Music with Recurrent Neural Networks

    In this paper, we describe IRIT's approach to the MediaEval 2015 "Emotion in Music" task. The goal was to predict two real-valued emotion dimensions, namely valence and arousal, in a time-continuous fashion. We chose recurrent neural networks (RNNs) for their sequence modeling capabilities. Hyperparameter tuning was performed through a 10-fold cross-validation setup on the 431 songs of the development subset. With the baseline set of 260 acoustic features, our best system achieved averaged root mean squared errors of 0.250 and 0.238, and Pearson's correlation coefficients of 0.703 and 0.692, for valence and arousal, respectively. These results were obtained by first making predictions with an RNN comprising only 10 hidden units, smoothing them with a moving average filter, and using them as input to a second RNN that generates the final predictions. This system gave our best results on the official test data subset for arousal (RMSE=0.247, r=0.588), but not for valence, where predictions were much worse (RMSE=0.365, r=0.029). This may be explained by the fact that valence and arousal values were highly correlated in the development subset (r=0.626), which was not the case in the test data. Finally, slight improvements over these figures were obtained by adding spectral flatness and spectral valley features to the baseline set.
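
    A minimal sketch of this two-stage pipeline is given below (PyTorch is assumed; the abstract does not specify a framework): a small RNN with 10 hidden units predicts frame-wise valence/arousal from the 260 baseline features, its output is smoothed with a moving average, and a second RNN produces the final predictions. The window length, the dummy data and all other details are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StageRNN(nn.Module):
        def __init__(self, in_dim, hidden_dim, out_dim=2):
            super().__init__()
            self.rnn = nn.RNN(in_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, out_dim)

        def forward(self, x):                # x: (batch, time, in_dim)
            h, _ = self.rnn(x)
            return self.head(h)              # (batch, time, 2): valence, arousal

    def moving_average(y, win=5):
        # Centered moving average over time, applied per output channel.
        batch, time, chans = y.shape
        kernel = torch.ones(chans, 1, win) / win
        y = y.transpose(1, 2)                # (batch, channels, time)
        y = F.conv1d(y, kernel, padding=win // 2, groups=chans)
        return y.transpose(1, 2)

    stage1 = StageRNN(in_dim=260, hidden_dim=10)   # 260 baseline features, 10 hidden units
    stage2 = StageRNN(in_dim=2, hidden_dim=10)     # refines the smoothed first-stage output

    features = torch.randn(4, 60, 260)             # dummy batch: 4 songs, 60 frames
    smoothed = moving_average(stage1(features))
    final = stage2(smoothed)                       # time-continuous valence/arousal estimates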

    Cosine-similarity penalty to discriminate sound classes in weakly-supervised sound event detection.

    The design of new methods and models when only weakly-labeled data are available is of paramount importance in order to reduce the costs of manual annotation and the considerable human effort associated with it. In this work, we address Sound Event Detection in the case where a weakly annotated dataset is available for training. The weak annotations provide tags of audio events but no temporal boundaries. The objective is twofold: 1) audio tagging, i.e. multi-label classification at recording level, and 2) sound event detection, i.e. localization of the event boundaries within the recordings. This work focuses mainly on the second objective. We explore an approach inspired by Multiple Instance Learning, in which we train a convolutional recurrent neural network to give predictions at frame level, using a custom loss function based on the weak labels and the statistics of the frame-based predictions. Since some sound classes cannot be distinguished with this approach, we improve the method by penalizing similarity between the predictions of the positive classes during training. On the test set used in the DCASE 2018 challenge, consisting of 288 recordings and 10 sound classes, the addition of this penalty resulted in a localization F-score of 34.75%, a 10% relative improvement over not using the penalty. Our best model achieved a 26.20% F-score on the DCASE 2018 official Eval subset, close to the 10-system ensemble approach that ranked second in the challenge with a 29.9% F-score.
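
    Below is a hedged sketch (PyTorch assumed) of a loss combining a clip-level tagging term with a penalty on the cosine similarity between the frame-level prediction curves of the classes tagged positive, in the spirit of the approach described above. The max-pooling aggregation, the weight lam and the tensor shapes are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def weak_sed_loss(frame_probs, weak_labels, lam=0.1):
        """frame_probs: (batch, time, n_classes) sigmoid outputs.
           weak_labels: (batch, n_classes) float 0/1 clip-level tags."""
        # Clip-level prediction by max pooling over time (a common MIL choice).
        clip_probs = frame_probs.max(dim=1).values
        tagging_loss = F.binary_cross_entropy(clip_probs, weak_labels)

        # Penalize cosine similarity between the temporal prediction curves of
        # every pair of positive classes, so they do not collapse onto the
        # same activation pattern.
        penalty, n_pairs = frame_probs.new_zeros(()), 0
        for b in range(frame_probs.size(0)):
            pos = torch.nonzero(weak_labels[b]).flatten()
            for i in range(len(pos)):
                for j in range(i + 1, len(pos)):
                    penalty = penalty + F.cosine_similarity(
                        frame_probs[b, :, pos[i]], frame_probs[b, :, pos[j]], dim=0)
                    n_pairs += 1
        if n_pairs > 0:
            penalty = penalty / n_pairs
        return tagging_loss + lam * penalty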

    El-WOZ: a client-server wizard-of-oz open-source interface

    Wizard of Oz (WOZ) prototyping employs a human wizard to simulate anticipated functions of a future system. In Natural Language Processing, this method is usually used to obtain early feedback on dialogue designs, to collect language corpora, or to explore interaction strategies. Yet, existing tools often require complex client-server configurations and setup routines, or suffer from compatibility problems across platforms, and integrated solutions that can also be used by designers and researchers without a technical background are missing. In this paper, we present a framework for multilingual dialogue research that combines speech recognition and synthesis with WOZ; all components are open source and adaptable to different application scenarios. The speech recording interface was developed in the context of a project on automatic speech recognition for elderly native speakers of European Portuguese. In order to collect spontaneous speech in a situation of interaction with a machine, it was designed as a Wizard-of-Oz platform in which users interact with a fake automated dialogue system controlled by a human wizard. It was implemented as a client-server application, and the subjects interact with a talking head: the human wizard chooses pre-defined questions or sentences in a graphical user interface, which are then synthesized and spoken aloud by the avatar on the client side. A small spontaneous speech corpus was collected in a day center. Eight speakers between 75 and 90 years old were recorded; they appreciated the interface and felt at ease with the avatar. Manual orthographic transcriptions were created for the total of about 45 minutes of speech.
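
    As a rough illustration of the client-server WOZ idea (not the El-WOZ code itself), the sketch below lets a wizard process send a chosen pre-defined sentence over a socket to a client that speaks it with an off-the-shelf TTS engine; the host/port values and the use of pyttsx3 are assumptions standing in for the avatar and speech synthesis components.

    import socket

    HOST, PORT = "127.0.0.1", 5005       # assumed local setup

    def wizard_server(sentences):
        # Wizard side: pick a pre-defined sentence by index and send it to the client.
        with socket.socket() as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                for idx in iter(lambda: input("sentence index (empty to quit): "), ""):
                    conn.sendall(sentences[int(idx)].encode("utf-8") + b"\n")

    def subject_client():
        # Subject side: receive each sentence and speak it aloud.
        import pyttsx3                   # stand-in for the avatar's TTS
        engine = pyttsx3.init()
        with socket.socket() as cli:
            cli.connect((HOST, PORT))
            for line in cli.makefile(encoding="utf-8"):
                engine.say(line.strip())
                engine.runAndWait()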

    The Hot Interstellar Medium in Normal Elliptical Galaxies III: The Thermal Structure of the Gas

    This is the third paper in a series analyzing X-ray emission from the hot interstellar medium in a sample of 54 normal elliptical galaxies observed by Chandra, focusing on the 36 galaxies with sufficient signal to compute radial temperature profiles. We distinguish four qualitatively different types of profile: positive gradient (outwardly rising), negative gradient (falling), quasi-isothermal (flat), and hybrid (falling at small radii, then rising). We measure the mean logarithmic temperature gradients in two radial regions: from 0 to 2 J-band effective radii R_J (excluding the central point source), and from 2 to 4 R_J. We find the outer gradient to be uncorrelated with intrinsic host galaxy properties, but strongly influenced by the environment: galaxies in low-density environments tend to show negative outer gradients, while those in high-density environments show positive outer gradients, suggesting the influence of circumgalactic hot gas. The inner temperature gradient is unaffected by the environment but strongly correlated with intrinsic host galaxy characteristics: negative inner gradients are more common for smaller, optically faint, low radio-luminosity galaxies, whereas positive gradients are found in bright galaxies with stronger radio sources. There is no evidence for bimodality in the distribution of inner or outer gradients. We propose three scenarios to explain the inner temperature gradients: (1) weak AGN heat the ISM locally, while higher-luminosity AGN heat the system globally through jets inflating cavities at larger radii; (2) the onset of negative inner gradients indicates a declining importance of AGN heating relative to other sources, such as compressional heating or supernovae; (3) the different temperature profiles are snapshots of different stages of a time-dependent flow.
    Comment: 18 pages, emulateapj, 55 figures (36 online-only figures included in astro-ph version), submitted to Ap
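
    To make the measurement concrete, the sketch below shows one way to estimate a mean logarithmic temperature gradient d(ln T)/d(ln r) from a radial profile by a least-squares fit in log-log space over the 0-2 R_J and 2-4 R_J ranges; the profile values are dummy data and the fitting choice is an assumption, not the paper's exact procedure.

    import numpy as np

    def log_gradient(r, T, r_min, r_max):
        """Mean logarithmic gradient d(ln T)/d(ln r) over [r_min, r_max]."""
        sel = (r >= r_min) & (r <= r_max)
        slope, _ = np.polyfit(np.log(r[sel]), np.log(T[sel]), 1)
        return slope

    R_J = 1.0                                   # J-band effective radius (arbitrary units)
    r = np.linspace(0.1, 4.0, 40) * R_J         # radial bins, excluding the central source
    T = 0.6 + 0.1 * r                           # dummy temperature profile in keV

    inner = log_gradient(r, T, 0.1 * R_J, 2 * R_J)   # "inner" gradient, 0-2 R_J
    outer = log_gradient(r, T, 2 * R_J, 4 * R_J)     # "outer" gradient, 2-4 R_J
    print(f"inner = {inner:.2f}, outer = {outer:.2f}")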

    Multilingual Audio Captioning using machine translated data

    Automated Audio Captioning (AAC) systems attempt to generate a natural language sentence, a caption, that describes the content of an audio recording in terms of sound events. Existing datasets provide audio-caption pairs with captions written in English only. In this work, we explore multilingual AAC using machine-translated captions. We automatically translated two prominent AAC datasets, AudioCaps and Clotho, from English into French, German and Spanish. We trained and evaluated monolingual systems in the four languages on AudioCaps and Clotho. In all cases, the models achieved similar performance, about 75% CIDEr on AudioCaps and 43% on Clotho. In French, we acquired manual captions of the AudioCaps eval subset. The French system, trained on the machine-translated version of AudioCaps, achieved significantly better results on this manual eval subset than the English system whose outputs we automatically translated into French. This advocates in favor of building systems directly in a target language rather than simply translating into the target language the English captions produced by an English system. Finally, we built a multilingual model, which achieved results in each language comparable to each monolingual system, while using far fewer parameters than a collection of monolingual systems.
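
    The abstract does not say which machine translation system was used; purely as an illustration of the caption-translation step, the sketch below assumes an off-the-shelf Helsinki-NLP/opus-mt-en-fr model through the Hugging Face transformers pipeline, applied to dummy audio-caption pairs standing in for AudioCaps/Clotho entries.

    from transformers import pipeline

    # English-to-French translator; other language pairs would use other opus-mt models.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

    dataset = [  # dummy (audio_id, English caption) pairs
        ("audio_0001", "A dog barks while cars pass by in the distance."),
        ("audio_0002", "Rain falls heavily on a metal roof."),
    ]

    translated = [
        (audio_id, translator(caption)[0]["translation_text"])
        for audio_id, caption in dataset
    ]
    for audio_id, caption_fr in translated:
        print(audio_id, caption_fr)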

    Unsupervised Speech Unit Discovery Using K-means and Neural Networks

    Unsupervised discovery of sub-lexical units in speech is a problem of current interest to speech researchers. In this paper, we report experiments in which we use phone segmentation followed by clustering of the segments with k-means and a convolutional neural network. We thus obtain an annotation of the corpus in pseudo-phones, which then allows us to find pseudo-words. We compare the results for two different segmentations, manual and automatic. To check the portability of our approach, we compare the results for three different languages (English, French and Xitsonga). The originality of our work lies in using neural networks in an unsupervised way, which differs from the common approach to unsupervised speech unit discovery based on auto-encoders. With the Xitsonga corpus, for instance, we obtained phone-level purity scores of 46% with manual segmentation and 42% with automatic segmentation, using 30 pseudo-phones. Based on the inferred pseudo-phones, we discovered about 200 pseudo-words.
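
    A minimal sketch of the clustering step follows: each phone segment is represented by a mean MFCC vector and grouped into 30 pseudo-phones with k-means. The MFCC features, librosa usage and scikit-learn KMeans are assumptions made for illustration; the paper's exact features and the CNN-based variant are not reproduced.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def segment_features(wav_path, boundaries, sr=16000, n_mfcc=13):
        """boundaries: list of (start_s, end_s) phone segments for one recording."""
        y, sr = librosa.load(wav_path, sr=sr)
        feats = []
        for start, end in boundaries:
            seg = y[int(start * sr):int(end * sr)]
            mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc)
            feats.append(mfcc.mean(axis=1))      # one vector per segment
        return np.stack(feats)

    # X: (n_segments, n_mfcc) matrix gathered over the whole corpus
    X = np.random.randn(500, 13)                 # dummy stand-in for real segment features
    pseudo_phones = KMeans(n_clusters=30, n_init=10, random_state=0).fit_predict(X)
    # pseudo_phones[i] is the pseudo-phone label of segment i; sequences of labels
    # can then be mined for recurring patterns (pseudo-words).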

    Multitask learning in Audio Captioning: a sentence embedding regression loss acts as a regularizer

    In this work, we propose to study the performance of a model trained with a sentence embedding regression loss component for the Automated Audio Captioning task. This task aims to build systems that can describe audio content with a single sentence written in natural language. Most systems are trained with the standard cross-entropy loss, which does not take into account the semantic closeness of sentences. We found that adding a sentence embedding loss term not only reduces overfitting but also increases SPIDEr from 0.397 to 0.418 in our first setting on the AudioCaps corpus. When we increased the weight decay value, our model came much closer to the current state-of-the-art methods, with a SPIDEr score of up to 0.444 compared to their 0.475, while using eight times fewer trainable parameters. In this training setting, however, the sentence embedding loss no longer has an impact on model performance.
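
    One plausible way to combine the two terms is sketched below (PyTorch assumed): the usual token-level cross-entropy plus a regression loss pulling a pooled decoder representation, through an auxiliary projection head, toward the reference caption's sentence embedding. The pooling, the MSE choice, the projection head and the weight alpha are assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CaptionLossWithEmbeddingRegression(nn.Module):
        def __init__(self, hidden_dim, sent_emb_dim, alpha=0.25, pad_id=0):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, sent_emb_dim)  # auxiliary regression head
            self.alpha = alpha
            self.pad_id = pad_id

        def forward(self, logits, targets, decoder_states, ref_sent_emb):
            # logits: (batch, time, vocab); targets: (batch, time)
            # decoder_states: (batch, time, hidden); ref_sent_emb: (batch, sent_emb_dim)
            ce = F.cross_entropy(logits.transpose(1, 2), targets,
                                 ignore_index=self.pad_id)
            pooled = decoder_states.mean(dim=1)              # simple mean pooling over time
            emb_loss = F.mse_loss(self.proj(pooled), ref_sent_emb)
            return ce + self.alpha * emb_loss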

    Killing two birds with one stone: Can an audio captioning system also be used for audio-text retrieval?

    Automated Audio Captioning (AAC) aims to develop systems capable of describing an audio recording with a textual sentence. In contrast, Audio-Text Retrieval (ATR) systems seek to find the best matching audio recording(s) for a given textual query (Text-to-Audio) or vice versa (Audio-to-Text). These tasks require different types of systems: AAC employs a sequence-to-sequence model, while ATR utilizes a ranking model that compares audio and text representations within a shared projection subspace. In this work, we investigate the relationship between AAC and ATR by exploring the ATR capabilities of an unmodified AAC system, without fine-tuning for the new task. Our AAC system consists of an audio encoder (ConvNeXt-Tiny) trained on AudioSet for audio tagging, and a Transformer decoder responsible for generating sentences. For AAC, it achieves high average SPIDEr-FL scores of 0.298 on Clotho and 0.472 on AudioCaps. For ATR, we propose using the standard cross-entropy loss values obtained for any audio/caption pair. Experimental results on the Clotho and AudioCaps datasets demonstrate decent recall values with this simple approach. For instance, we obtained a Text-to-Audio R@1 value of 0.382 for AudioCaps, which is above the current state-of-the-art method without external data. Interestingly, we observe that normalizing the loss values was necessary for Audio-to-Text retrieval.
    Comment: camera-ready version (14/08/23)
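
    The scoring idea can be sketched as follows: for Text-to-Audio retrieval, the query caption is force-decoded against every candidate audio and candidates are ranked by the resulting cross-entropy; for Audio-to-Text, losses are first normalized per caption across audios, since some captions are intrinsically easier to generate. The helper caption_nll and the z-score normalization are illustrative assumptions, not necessarily the paper's exact method.

    import numpy as np

    def text_to_audio_ranking(model, query_caption, audios, caption_nll):
        # caption_nll(model, audio, caption): hypothetical helper returning the
        # per-token cross-entropy of 'caption' under the AAC decoder given 'audio'.
        losses = np.array([caption_nll(model, a, query_caption) for a in audios])
        return np.argsort(losses)                 # best-matching audio first (lowest loss)

    def audio_to_text_ranking(model, query_audios, captions, caption_nll):
        # Loss matrix: rows = query audios, columns = candidate captions.
        L = np.array([[caption_nll(model, a, c) for c in captions] for a in query_audios])
        # Each caption's column is z-scored across audios to compensate for
        # captions that are intrinsically easy or hard to generate.
        L = (L - L.mean(axis=0, keepdims=True)) / (L.std(axis=0, keepdims=True) + 1e-8)
        return np.argsort(L, axis=1)              # per audio: best caption first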